Patent Abstract:
DISPLAY DEVICE AND CONTROL METHOD

A new and improved display device and control method are provided, in which the state of an image display device can be optimized for a user in an arbitrary position. Specifically disclosed is a display device provided with: an image capture unit that captures video images within a prescribed range in the image display direction; an image analysis unit that analyzes the video images captured by the aforementioned image capture unit and calculates the user's position; a system optimization processing unit that calculates system control information for optimizing the system based on the user position calculated by the aforementioned image analysis unit; and a system control unit that optimizes the system based on the system control information calculated by the aforementioned system optimization processing unit.
Publication number: BR112012005231A2
Application number: R112012005231-4
Filing date: 2010-07-22
Publication date: 2020-08-04
Inventors: Yuichi IIDA; Shinichi Hayashi; Yusuke Sakai; Junji Ooi; Daijiro Sakuma; Shingo Tsurumi
Applicant: Sony Corporation
IPC main class:
Patent Description:

"DISPLAY DEVICE AND CONTROL METHOD"

Technical Field

The present invention concerns a display device and a control method.

Background of the Technique

In recent years, following the expansion of the flat screen television market, demand for image display devices such as large screen televisions to install in, for example, a living room is growing.
In such a situation, an image display device having several functions is being proposed.
Summary of the Invention

Technical Problem

Incidentally, since a user can view an image that an image display device is displaying from any desired position, depending on the user's viewing position the state of the image display device, namely an audio property such as the output volume balance of the audio output section, an image property of the image display section of the image display device, the display contents, the display direction of the display device and the like, may not be optimal for the user in that position.
Thus, the present invention has been made in view of the above problem, and an objective of the present invention is to provide a new and improved image display device and control method capable of optimizing the state of the image display device for the user in the desired position.
Solution to the Problem

To solve the aforementioned problem, in accordance with an aspect of the present invention, there is provided a display device including: an imaging section that takes a dynamic image of a predetermined range with respect to an image display direction; an image analysis section that analyzes the dynamic image obtained by the imaging section and calculates a user position; a system optimization processing section that calculates system control information for optimizing a system based on the user position calculated by the image analysis section; and a system control section that optimizes the system based on the system control information calculated by the system optimization processing section.
The system optimization processing section can calculate system control information to optimize a sound output volume balance from an audio output section based on the user's position calculated by the image analysis section.
The system optimization processing section can calculate system control information to optimize an image property of an image display section based on the user's position calculated by the image analysis section.
The system optimization processing section can calculate system control information to optimize display content for an image display section based on the user's position calculated by the image analysis section.
The system optimization processing section can calculate system control information to optimize a device direction of the device itself based on the user's position calculated by the image analysis section.
The image analysis section can analyze the dynamic image obtained by the imaging section and calculate a three-dimensional position of the user.
The image analysis section can analyze the dynamic image obtained by the imaging section and calculate respective positions for a plurality of users, and the system optimization processing section can calculate a balance center position of the plurality of users based on the positions of the plurality of users calculated by the image analysis section, and calculate system control information for optimizing the system based on the calculated balance center position of the plurality of users.
In addition, to solve the above problem, according to another aspect of the present invention, there is provided a control method including: an imaging step of taking a dynamic image of a predetermined range with respect to an image display direction; an image analysis step of analyzing the obtained dynamic image and calculating a user position; a system optimization processing step of calculating system control information for optimizing a system based on the calculated user position; and a system control step of optimizing the system based on the calculated system control information.
Advantageous Effects of the Invention

As explained above, according to the present invention, a new and improved image display device and control method capable of optimizing the state of the image display device for the user in the desired position can be provided.
Brief Description of the Drawings

Fig. 1 is an explanatory diagram explaining an external appearance of an image display device 100 in an embodiment of the present invention.
Fig. 2 is an explanatory diagram explaining a configuration of the image display device 100 of the embodiment of the present invention.
Fig. 3 is an explanatory diagram that explains a configuration of a control section 110.

Fig. 4 is an explanatory diagram that explains the configuration of the control section 110.

Fig. 5(A) is an explanatory diagram to explain a case where a user 1 and a user 2 are present in an imaging area of an imaging section 104, and Fig. 5(B) is an explanatory diagram to explain a face detection position [a1, b1] and a face size [w1, h1] of user 1, and a face detection position [a2, b2] and a face size [w2, h2] of user 2, which are included in an image obtained by the imaging section 104.

Fig. 6(A) is an explanatory diagram to explain a case where users are present at a reference distance d0 and a distance d1 in the imaging area of the imaging section 104, Fig. 6(B) is an explanatory diagram to explain the user's face size [w1, h1] at the distance d1 in the image obtained by the imaging section 104, and Fig. 6(C) is an explanatory diagram to explain a reference face size [w0, h0] of the user at the reference distance d0 in the image obtained by the imaging section 104.

Fig. 7(A) is an explanatory diagram to explain a device center [0, 0, 0] of the image display device 100 and a camera position [Δx, Δy, Δz] of the imaging section 104, and Fig. 7(B) is an explanatory diagram to explain the device center [0, 0, 0] of the image display device 100, a front axis [0, 0], the camera position [Δx, Δy, Δz] of the imaging section 104, and an installation angle [Δϕ, Δθ].
Fig. 8 is a flow diagram showing an example of an optimization process according to a user's position by the image display device 100 of the embodiment of the present invention.
Fig. 9 is a flow diagram showing an example of an optimization process according to the positions of one or more users by the image display device 100 of the embodiment of the present invention.
Fig. 10 is a flow diagram showing an example of an optimization process according to the age of one or more users by the image display device 100 of the embodiment of the present invention.
Fig. 11 is an explanatory diagram to explain a process for optimizing a volume balance.
Fig. 12 is an explanatory diagram to explain a method of calculating the balance center position of a group of users.
Fig. 13(A) is an explanatory diagram to explain a position [D0, Ah0] of a user A and a position [D1, Ah1] of a user B in a horizontal direction, and Fig. 13(B) is an explanatory diagram to explain a position [D0, Av0] of user A and a position [D1, Av1] of user B in a vertical direction.
Fig. 14 (A) and Fig. 14 (B) are explanatory diagrams to explain a process for optimizing a GUI read size.
Fig. 15 (A) to Fig. 15 (C) are explanatory diagrams to explain a method for correcting the reference face size [w0, h0] at the reference distance d0 in a user distance calculation.
Description of Embodiments

In the following, preferred embodiments of the present invention will be described in detail with reference to the appended drawings.
Note that, in this specification and in the drawings, elements that have substantially the same function and structure are denoted with the same reference signs, and repeated explanation is omitted.
Note that the explanation will be given in the following order.

<1. One Embodiment of the Present Invention>
[1-1. Configuration of the Image Display Device]
[1-2. Configuration of the Control Section]
[1-3. Optimization Process according to User Position]
[1-4. Optimization Process according to the Positions of One or More Users]
[1-5. Optimization Process according to the Ages of One or More Users]

<1. One Embodiment of the Present Invention>

[1-1. Configuration of the Image Display Device]

First, the configuration of an image display device of one embodiment of the present invention will be explained.
Fig. 1 is an explanatory diagram explaining an external appearance of an image display device 100 in the embodiment of the present invention.
Fig. 1 is a front view of the image display device 100 seen from the front side.
Hereinafter, the external appearance of the image display device 100 of the embodiment of the present invention will be explained with reference to Fig. 1. As shown in Fig. 1, the image display device 100 of the embodiment of the present invention includes imaging sections 104 that take a dynamic image, arranged at an upper center position and at left and right center positions of a display panel section 102 that displays a still image or a dynamic image.
The imaging sections 104 take the dynamic image with respect to a direction in which the image display device 100 displays the still image or the dynamic image on the display panel section 102. The image display device 100 of the embodiment analyzes an image obtained by the imaging sections 104, and detects a user's face included in the image.
The image display device 100 analyzes the detected face of the user, and detects a face detection position and a face size.
The image display device 100 calculates a relative position of the user with respect to an optical axis of a camera of the respective imaging sections 104 based on the detected face detection position and face size.
Then, the image display device 100 calculates the user's position with respect to a device center and a front axis of the image display device 100 based on the result of calculating the relative position with respect to the optical axis of the camera of the respective imaging sections 104, and on attached information such as the position and camera angle of the respective imaging sections 104. The image display device 100 of the embodiment characteristically optimizes a state of the image display device 100 according to the user's position; that is, an audio property such as volume, an image property, display contents, and a device direction with respect to the user's position.
In addition, the image display device 100 of the embodiment includes a sensor section 106 arranged at a lower center position of the display panel section 102. The sensor section 106 detects the presence/absence of a person in front of the image display device 100. Note that, in Fig. 1, although the image display device 100 has imaging sections 104 that take the dynamic image at three positions around the display panel section 102, it goes without saying that the positions of the imaging sections 104 taking the dynamic image are not limited to the above examples in the present invention; for example, a device independent of the image display device 100 may be provided, the device may be connected to the image display device 100, and the dynamic image may be taken by this device.
Additionally, it goes without saying that the number of imaging sections 104 is not limited to three; images can be obtained by providing two or fewer, or four or more, imaging sections 104. Furthermore, it goes without saying that the number of sensor sections 106 is not limited to one, and two or more sensor sections can be provided.
Additionally, although not shown in Fig. 1, the image display device 100 may further include a signal receiving section that can receive a control signal from a remote controller (not shown) via an infrared scheme or a wireless scheme.
As above, the external appearance of the image display device 100 of the embodiment of the present invention has been explained with reference to Fig. 1. In the following, a configuration of the image display device 100 of the embodiment of the present invention will be explained.
Fig. 2 is an explanatory diagram explaining the configuration of the image display device 100 of the embodiment of the present invention.
Hereinafter, the configuration of the image display device 100 of the embodiment of the present invention will be explained with reference to Fig. 2. As shown in Fig. 2, the image display device 100 of the embodiment of the present invention is configured to include the display panel section 102, the imaging sections 104, the sensor section 106, a speaker section 108, a mechanism section 109, and a control section 110. The control section 110 is in turn configured to include an image input section 112, an image processing section 114, a viewing state analysis section 116, a user position information storage section 117, a viewing state recording section 118, a system optimization processing section 120, and a system control section 122.

The display panel section 102 displays the still image or the dynamic image based on a panel drive signal. In the embodiment, the display panel section 102 displays the still image or the dynamic image using liquid crystals. Of course, it goes without saying that in the invention the display panel section 102 is not limited to the above example. The display panel section 102 may display the still image or the dynamic image using a self-emitting display device such as an organic EL (electroluminescence) device.

As described above, the imaging sections 104 are arranged at the upper center position and at the left and right center positions of the display panel section 102 that displays the still image or the dynamic image. The imaging sections 104 take the dynamic image with respect to the direction in which the image display device 100 displays the dynamic image on the display panel section 102 when the panel drive signal is supplied to the display panel section 102 and the display panel section 102 is displaying the dynamic image. The imaging sections 104 may take the dynamic image with CCD (Charge Coupled Device) image sensors, or with CMOS (Complementary Metal Oxide Semiconductor) image sensors.
The dynamic image taken by the imaging sections 104 is sent to the control section 110.

The sensor section 106 is provided at a lower center position of the display panel section 102 that displays the still image or the dynamic image, and serves, for example, to detect the presence/absence of a person in front of the image display device 100. Additionally, if a person is present in front of the image display device 100, the sensor section 106 can detect the distance between the image display device 100 and that person. The detection result and distance information from the sensor section 106 are sent to the control section 110.

The speaker section 108 emits sounds based on a sound emission signal. The mechanism section 109 controls the display direction of the display panel section 102 of the image display device 100, for example based on a drive signal. The control section 110 controls the operations of the image display device 100. In the following, the respective sections of the control section 110 will be explained.

The image input section 112 receives the dynamic image taken by the imaging sections 104. The dynamic image received by the image input section 112 is sent to the image processing section 114, and is used in the image processes in the image processing section 114. The image processing section 114 is an example of an image analysis section of the present invention, and performs the respective image processes on the dynamic image that is taken by the imaging sections 104 and sent from the image input section 112. The image processes performed by the image processing section 114 include a detection process for an object included in the dynamic image taken by the imaging sections 104, a detection process for the number of people included in the dynamic image, and a detection process for faces and facial expressions included in the dynamic image. The results of the respective image processes by the image processing section 114 are sent to the viewing state analysis section 116, and are used for the analysis of the presence/absence of a person watching the image display device 100, and of the viewing state and viewing position of that person.
As for the detection process to detect the face included in the image by the image processing section 114, techniques described for example in Japanese Patent Application Publication No. 2007-65766 and Japanese Patent Application Publication No. 2005-44330 can be used.
In the following, the face detection process will be briefly explained.
In order to detect the user's face from an image, firstly, the face position, face size, and face direction in the supplied image are respectively detected.
By detecting the position and size of the face, a portion of the face image can be cut out of the image.
Thus, based on the cut-out face image and the face direction information, characteristic portions of the face (facial characteristic positions), for example eyebrows, eyes, nose, mouth, and the like, are detected.
In detecting the facial characteristic positions, for example, a method called AAM (Active Appearance Models) can be adopted to detect the characteristic positions.
When the characteristic facial positions are detected, a local characteristic value is calculated for each of the facial characteristic positions.
By calculating the characteristic values and storing the calculated local characteristic values together with the face image, face recognition becomes possible from an image taken by the imaging section 104. As for the face recognition method, techniques described, for example, in Japanese Patent Application Publication No. 2007-65766 and Japanese Patent Application Publication No. 2005-44330 can be used, so a detailed description of them is omitted here.
Additionally, it is possible to determine whether the face in the supplied image is of a man or a woman, and how old the person is, from the face image and the facial characteristic positions.
In addition, by previously recording information on predetermined faces, the person in the supplied image can be searched for among the recorded faces, and an individual can be specified.
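The matching step just described (comparing stored local characteristic values against those of a query face) can be sketched as simple nearest-neighbor matching on feature vectors. This is an illustrative assumption, not the patent's actual recognition algorithm; the function name, enrollment structure, and threshold-free matching are all hypothetical.

```python
import math

def identify(query_features, enrolled):
    """Nearest-neighbor sketch of the recognition step: each enrolled
    face is stored as a vector of local characteristic values, and the
    query face is matched to the closest stored vector.
    'enrolled' maps a person's label to a feature vector."""
    best_name, best_dist = None, float("inf")
    for name, feats in enrolled.items():
        dist = math.dist(query_features, feats)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist
```

In practice a distance threshold would be added so that an unenrolled face is reported as unknown rather than forced onto the nearest entry.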
The viewing state analysis section 116 is an example of an image analysis section of the present invention.
It receives the results of the respective image processes by the image processing section 114 and the detection result and distance information from the sensor section 106, and analyzes the viewing state and viewing position of the person watching the image displayed by the image display device 100 using those results. By the viewing state analysis section 116 analyzing the viewing state and viewing position of the watching person, the image display device 100 can control the audio property, such as the sound balance of the output of the speaker section 108, the image property of the display panel section 102, the display contents of the display panel section 102, and the display direction of the display panel section 102 using the mechanism section 109, according to the viewing position of the user watching the image display device 100. The analysis result of the analysis process by the viewing state analysis section 116 is sent to the viewing state recording section 118, the user position information storage section 117, and the system optimization processing section 120. Note that, in a case where a dynamic body is detected from the detection result and distance information from the sensor section 106 but the distance between the sensor section 106 and that dynamic body is at or beyond a predetermined distance, the viewing state analysis section 116 can exclude that object from being a detection target.
The viewing state recording section 118 records the analysis result obtained by the analysis process of the viewing state analysis section 116. The analysis result of the viewing state analysis section 116 recorded by the viewing state recording section 118 is used in the system optimization process by the system optimization processing section 120. Additionally, the analysis result recorded by the viewing state recording section 118 can be sent to an external data collection server 200.

The user position information storage section 117 stores the analysis result of the analysis process by the viewing state analysis section 116.

The system optimization processing section 120 calculates the system control information for carrying out the system optimization process on the respective sections of the image display device 100 using the analysis result obtained by the analysis process of the viewing state analysis section 116. The system optimization process on the respective sections of the image display device 100 includes the control of the audio property, such as the sound balance of the output of the speaker section 108, the control of the image property of the display panel section 102, the control of the display contents of the display panel section 102, and the control of the display direction of the display panel section 102 of the image display device 100 by the mechanism section 109. The image display device 100 can perform the optimization process according to the user's position based on the system control information calculated by the system optimization processing section 120. The system control information calculated by the system optimization processing section 120 is sent to the system control section 122.
The system control section 122 performs the system optimization process on the respective sections of the image display device 100 based on the system control information calculated by the system optimization processing section 120. For example, the system control section 122 performs the volume balance control of the output of the speaker section 108, the image property control of the display panel section 102, the display content control of the display panel section 102, the display direction control of the display panel section 102 of the image display device 100 by the mechanism section 109, and the like, based on the system control information calculated by the system optimization processing section 120.

As above, the configuration of the image display device 100 of the embodiment of the present invention has been explained with reference to Fig. 2. Next, the configuration of the control section 110 included in the image display device 100 of the embodiment of the present invention will be explained in more detail.

[1-2. Configuration of the Control Section]

Fig. 3 is an explanatory diagram that explains the configuration of the control section 110 included in the image display device 100 of the embodiment of the present invention.
Within the control section 110, Fig. 3 explains the configuration of the viewing state analysis section 116 included in the control section 110. Next, the configuration of the viewing state analysis section 116 will be explained with reference to Fig. 3. As shown in Fig. 3, the viewing state analysis section 116 is configured to include a user direction/distance calculation section 132 and a user position information calculation section 134.

The user direction/distance calculation section 132 receives the results of the respective image processes by the image processing section 114 and optical information such as the viewing angle and the resolution of each camera of the imaging sections 104, and calculates the relative position of the user (direction [ϕ1, θ1], distance d1) with respect to the optical axis of each camera of the imaging sections 104 using the results of the respective image processes by the image processing section 114 and the optical information of the imaging sections 104. Here, the obtained image, as well as face detection information (for example, information such as the face detection position [a1, b1], the face size [w1, h1], and other attribute information such as age and gender) for each user using the image display device 100, is sent from the image processing section 114 to the user direction/distance calculation section 132 of the viewing state analysis section 116.

Fig. 5(A) is an explanatory diagram to explain a case where a user 1 and a user 2 are present in the imaging area of the imaging section 104, and Fig. 5(B) is an explanatory diagram to explain the face detection position [a1, b1] and face size [w1, h1] of user 1, and the face detection position [a2, b2] and face size [w2, h2] of user 2, which are included in the image obtained by the imaging section 104. Additionally, Fig. 6(A) is an explanatory diagram to explain a case where users are present at the reference distance d0 and a distance d1 in the imaging area of the imaging sections 104, Fig. 6(B) is an explanatory diagram to explain the user's face size [w1, h1] at the distance d1 in the image obtained by the imaging section 104, and Fig. 6(C) is an explanatory diagram to explain the reference face size [w0, h0] at the reference distance d0 in the image obtained by the imaging section 104.

The direction [ϕ1, θ1] is calculated as follows from the face detection position [a1, b1], normalized by the size [xmax, ymax] of the obtained image, and the viewing angle [ϕ0, θ0] of the camera of the imaging section 104:

Horizontal direction: ϕ1 = ϕ0 * a1
Vertical direction: θ1 = θ0 * b1

Additionally, the distance d1 is calculated as follows from the reference face size [w0, h0] at the reference distance d0:

Distance: d1 = d0 * (w0 / w1)

The user position information calculation section 134 receives the result of calculating the relative position of the user with respect to the optical axis of the camera of the respective imaging sections 104 by the user direction/distance calculation section 132, and attached information such as the position and camera angle of the respective imaging sections 104, and calculates the three-dimensional position of the user with respect to the device center and the front axis of the image display device 100 using that relative position result and the attached information of the respective imaging sections 104. The user position information calculated by the user position information calculation section 134 is sent to the user position information storage section 117.

Fig. 7(A) is an explanatory diagram to explain the device center [0, 0, 0] of the image display device 100 and the camera position [Δx, Δy, Δz] of the imaging section 104, and Fig. 7(B) is an explanatory diagram to explain the device center [0, 0, 0] of the image display device 100, the front axis [0, 0], the camera position [Δx, Δy, Δz] of the imaging section 104, and the installation angle [Δϕ, Δθ].

When the relative position of the user with respect to the optical axis of the camera of the imaging section 104 is the direction [ϕ1, θ1] and the distance d1, the device center of the image display device 100 is [0, 0, 0] and, with respect to the front axis [0, 0], the positional difference [Δx, Δy, Δz] is the camera position of the imaging section 104 and the angular difference [Δϕ, Δθ] is the installation angle, the position [x1, y1, z1] of the user with respect to the device center [0, 0, 0] of the image display device 100 is calculated as follows:

x1 = d1 * cos(θ1 - Δθ) * tan(ϕ1 - Δϕ) - Δx
y1 = d1 * tan(θ1 - Δθ) - Δy
z1 = d1 * cos(θ1 - Δθ) * cos(ϕ1 - Δϕ) - Δz

Fig. 4 is an explanatory diagram that explains the configuration of the control section 110 included in the image display device 100 of the embodiment of the present invention.
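The direction, distance, and device-frame position formulas above can be rendered as a minimal Python sketch. The function names are illustrative, and the sample reference face size w0 and reference distance d0 used in the usage note are assumed values, not taken from the patent.

```python
import math

def user_direction_distance(a1, b1, w1, phi0, theta0, w0, d0):
    # Direction from the normalized face detection position [a1, b1]
    # and the camera viewing angle [phi0, theta0] (angles in radians).
    phi1 = phi0 * a1        # horizontal direction
    theta1 = theta0 * b1    # vertical direction
    # Distance from the reference face size w0 at reference distance d0:
    # the apparent face width shrinks in inverse proportion to distance.
    d1 = d0 * (w0 / w1)
    return phi1, theta1, d1

def user_position_device_frame(phi1, theta1, d1, dx, dy, dz, dphi, dtheta):
    # [x1, y1, z1] with respect to the device center [0, 0, 0],
    # correcting for the camera position [dx, dy, dz] and the
    # installation angle [dphi, dtheta].
    x1 = d1 * math.cos(theta1 - dtheta) * math.tan(phi1 - dphi) - dx
    y1 = d1 * math.tan(theta1 - dtheta) - dy
    z1 = d1 * math.cos(theta1 - dtheta) * math.cos(phi1 - dphi) - dz
    return x1, y1, z1
```

For example, a face detected on-axis (a1 = b1 = 0) at half the reference size (w1 = w0/2), seen by a camera mounted exactly at the device center, yields a user straight ahead at twice the reference distance.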
Within the control section 110, Fig. 4 explains the configurations of the user position information storage section 117, the system optimization processing section 120 and the system control section 122 included in the control section 110. Next, the configurations of the user position information storage section 117, the system optimization processing section 120 and the system control section 122 will be explained with reference to Fig. 4. As shown in Fig. 4, the system optimization processing section 120 is configured to include an audio property optimization processing section 142, an image property optimization processing section 144, and a device direction optimization processing section 146. Additionally, the system control section 122 is configured to include an audio property control section 152, an image property control section 154, and a device direction control section 156.

The user position information storage section 117 stores the user position information that is the result of calculating the user position with respect to the device center and the front axis of the image display device 100 by the user position information calculation section 134 of the viewing state analysis section 116. The user position information stored in the user position information storage section 117 is sent to the system optimization processing section 120.

The audio property optimization processing section 142 of the system optimization processing section 120 calculates the audio property control information for performing the audio property optimization process on the speaker section 108 of the image display device 100 based on the user position information sent from the user position information storage section 117, in order to optimize the audio property of the image display device 100 for the user in the desired position.
The audio property control information calculated by the audio property optimization processing section 142 is sent to the audio property control section 152 of the system control section 122. As the audio property optimization process, there is a process of optimizing the volume balance of the stereo sound output of the speaker section 108 on the right and left sides, and an optimization process related to a surround effect of the stereo sound output of the speaker section 108. Since a difference is generated in the volume balance on the right and left sides of the stereo sound output of the speaker section 108 depending on the user's position, the optimization process optimizes the gains on the right and left sides.
For example, as shown in Fig. 11, the difference from a reference point is calculated as follows, according to the principle that the volume of the stereo sound output of the speaker section 108 attenuates in inverse proportion to the square of the distance; the difference (gain_dif) in volume of the speaker section 108 between the right and left sides is calculated, and the volume balance on the right and left sides can be optimized.

gain_Lch = 20 * log(d_Lch / d_org_LRch)
gain_Rch = 20 * log(d_Rch / d_org_LRch)
gain_dif = gain_Rch - gain_Lch
         = 20 * log(d_Rch) - 20 * log(d_Lch)
         = 20 * log(d_Rch / d_Lch)

Here, note that:
gain_Lch: the difference in Lch gain
gain_Rch: the difference in Rch gain
d_org_LRch: the distance from the right and left speaker sections to the reference point
d_Lch: the distance from the Lch speaker section to the user
d_Rch: the distance from the Rch speaker section to the user
gain_dif: the difference in L/R gain
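The gain-difference formula above reduces to a one-line computation. The sketch below assumes base-10 logarithms (the usual decibel convention; the text writes only "log"), and the function name is illustrative.

```python
import math

def lr_gain_difference_db(d_Lch, d_Rch):
    # Gain difference (gain_dif = gain_Rch - gain_Lch, in dB) needed to
    # rebalance the stereo output for a user at distances d_Lch and
    # d_Rch from the left and right speaker sections. The reference
    # distance d_org_LRch cancels out of the difference, so it does not
    # appear here.
    return 20.0 * math.log10(d_Rch / d_Lch)
```

A user sitting twice as far from the right speaker as from the left one would thus get roughly a 6 dB relative boost on the right channel, and a centered user gets no correction at all.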
Additionally, if a plurality of users is present, the optimization process can optimize the right and left volume balance of the sound output of the speaker section 108 with respect to the balance center of the user group, or alternatively with priority for a particular user. As a method of calculating the balance center of the user group, when a plurality of users, namely users A, B, C and D as shown in Fig. 12, is present, the calculation can be performed as follows.
d_cent_dif = 0;
temp_CAM_DIS = 0;
if (CAM_AUDIENCE != 0) {
    for (int i = 0; i < CAM_AUDIENCE; i++) {
        d_cent_dif += CAM_DIS[i] * tan(CAM_HOR_ANGLE[i] * PI / 180);
        temp_CAM_DIS += CAM_DIS[i];
    }
}
d_cent_dif = d_cent_dif / CAM_AUDIENCE;  // return (use) value: the processed balance center angle component
CAM_DIS = temp_CAM_DIS / CAM_AUDIENCE;   // return (use) value: the processed balance center distance
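The balance-center pseudocode above can be restated as a runnable sketch, assuming CAM_HOR_ANGLE holds horizontal angles in degrees and CAM_DIS holds distances (the names, units and return values are assumptions for illustration):

```python
import math

def balance_center(angles_deg, distances):
    """Sketch of the user-group balance center calculation above.

    angles_deg: horizontal angles CAM_HOR_ANGLE[i] of each user (degrees).
    distances:  distances CAM_DIS[i] of each user.
    Returns (lateral_offset, mean_distance) of the group.
    """
    n = len(distances)
    if n == 0:
        return 0.0, 0.0
    # lateral offset of each user from the front axis: d * tan(angle)
    lateral_offset = sum(d * math.tan(math.radians(a))
                         for a, d in zip(angles_deg, distances)) / n
    mean_distance = sum(distances) / n
    return lateral_offset, mean_distance
```

Two users straight ahead at 1 m and 3 m, for example, yield a zero lateral offset and a 2 m mean distance.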
Here, note that:
CAM_AUDIENCE: the number of users within the imaging area of the imaging sections 104
CAM_HOR_ANGLE[0]: the angle of user A
CAM_HOR_ANGLE[1]: the angle of user B
CAM_HOR_ANGLE[2]: the angle of user C
CAM_HOR_ANGLE[3]: the angle of user D
CAM_DIS[0]: the distance of user A
CAM_DIS[1]: the distance of user B
CAM_DIS[2]: the distance of user C
CAM_DIS[3]: the distance of user D
The image property optimization processing section 144 of the system optimization processing section 120 calculates image property control information for performing an image property optimization process on the display panel 102 of the image display device 100, based on the user position information sent from the user position information storage section 117, in order to optimize the image property of the image display device 100 for the user in the desired position. The image property control information calculated by the image property optimization processing section 144 is sent to the image property control section 154 of the system control section 122. As the image property optimization process, there are processes such as a gamma correction for optimizing the appearance of black, and a color gain correction for complying with a color change.
For example, the gamma correction is performed as follows:
γ = 2.2 + image quality correction - 0.1 × user direction
In addition, for example, the color gain correction is performed as follows:
Color gain = User color + image quality correction ± α × user direction
R (G, B) gain = R (G, B) × image quality correction ± α × user direction
Additionally, if a plurality of users is present, the gamma correction and the color gain correction can be performed with respect to the balance center of the user group, or alternatively the gamma correction and the color gain correction can be performed with priority for a particular user.
Assuming that a position for user A is [D0, Ah0] and a position for user B is [D1, Ah1] in the horizontal direction as shown in Fig. 13 (A), and a position for user A is [D0, Av0] and a user B position is [D1, Av1] in the vertical direction as shown in Fig. 13 (B), an average viewing angle correction coefficient and adjustment data for the system optimization process are calculated using the following formula .
Average viewing angle correction coefficient = {(1/D0 * (Ah0 + Av0) + 1/D1 * (Ah1 + Av1) + ... + 1/Dn * (Ahn + Avn)) / n} * correction value
Adjustment data = (basic data) * {1 + (correction value at maximum viewing angle) * (average viewing angle correction coefficient)}
The average viewing angle gives more weight to nearby users, and is calculated as an average of the horizontal angle and the vertical angle over the number of users.
The correction coefficient is calculated by multiplying the average viewing angle by the correction value, and the total correction amount is calculated by multiplying the correction coefficient by the maximum correction value.
The adjustment data is calculated by adding or subtracting the amount of correction to or from the basic data (data without correction: γ = 2.2 + image quality correction). Additionally, in the optimization of the image property, since the optimal value varies depending on the age and sex of the user, the optimization process can be carried out using the attribute information such as the age and sex obtained from the image processing section in addition to the user position information.
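As an illustration only, the averaged viewing-angle correction and the adjustment-data formula above might be computed as follows (all parameter names are illustrative, not from the source):

```python
def adjustment_data(positions, correction_value, max_angle_correction, basic_data):
    """Sketch of the averaged viewing-angle correction above.

    positions: list of (D, Ah, Av) per user -- distance, horizontal
    viewing angle, vertical viewing angle.
    """
    n = len(positions)
    # the 1/D factor gives nearby users more weight, as the text describes
    avg_coeff = sum((1.0 / d) * (ah + av) for d, ah, av in positions) / n
    avg_coeff *= correction_value
    # adjustment data = basic data * {1 + (max-angle correction) * coeff}
    return basic_data * (1.0 + max_angle_correction * avg_coeff)
```

With a single user on-axis (Ah = Av = 0) the coefficient vanishes and the basic data (for example γ = 2.2) is returned unchanged.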
For example, a luminance correction of display panel 102 is performed as follows.
BackLight = basic adjustment value × correction value
Correction value = 10^(A × log (screen illumination) + B × log (viewing angle) + C × log (average image level) + D × age) / screen illumination
Additionally, in an experiment, the optimal luminance is known to have the following relationship with the screen illumination, the viewing angle, the average luminance level of the image, and the age:
log (optimal luminance) = A × log (screen illumination) + B × log (viewing angle) + C × log (average image level) + D × age
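A minimal sketch of the backlight correction above, assuming base-10 logarithms and experimentally determined coefficients A to D (their values are not given in the text):

```python
import math

def backlight_level(basic, A, B, C, D, illumination, view_angle, apl, age):
    """Sketch of the luminance correction above.

    illumination: screen illumination, view_angle: viewing angle,
    apl: average image (picture) level, age: user age.
    A..D are experimental constants; units are assumptions.
    """
    # log(optimal luminance) per the experimental relationship
    optimal_log = (A * math.log10(illumination) + B * math.log10(view_angle)
                   + C * math.log10(apl) + D * age)
    # correction value = optimal luminance / screen illumination
    correction = (10.0 ** optimal_log) / illumination
    return basic * correction
```

With A = 1 and B = C = D = 0 the optimal luminance equals the screen illumination, so the correction value is 1 and the basic adjustment value passes through unchanged.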
The device direction optimization processing section 146 of the system optimization processing section 120 calculates device direction control information for performing a device direction optimization process on a mechanism section 109 of the image display device 100, based on the user position information sent from the user position information storage section 117, in order to optimize the device direction of the image display device 100 with respect to the user in the desired position. The device direction control information calculated by the device direction optimization processing section 146 is sent to the device direction control section 156 of the system control section 122. As the device direction optimization process, if the mechanism section 109 is equipped with a rotating mechanism in the vertical and horizontal directions, the image display device 100 is rotated so that the front axis [0, 0] of the image display device 100 faces the direction [φ1, θ1] of the user.
Because of this, the display panel 102 of the image display device 100 can be oriented in a direction that is head-on as seen from the user.
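As a hedged illustration of the device direction optimization, a pan/tilt command toward the user's direction might look as follows (the mechanism limits and the clamping behavior are hypothetical, not stated in the source):

```python
def rotation_command(user_h_deg, user_v_deg, max_pan=30.0, max_tilt=10.0):
    """Sketch: pan/tilt so the front axis [0, 0] faces the user's
    direction. max_pan/max_tilt are hypothetical mechanism bounds."""
    # clamp the requested rotation to the mechanism's range of motion
    pan = max(-max_pan, min(max_pan, user_h_deg))
    tilt = max(-max_tilt, min(max_tilt, user_v_deg))
    return pan, tilt
```

A user within the mechanism's range is faced directly; a user beyond it gets the nearest achievable orientation.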
The audio property control section 152 of the system control section 122 performs the audio property optimization process based on the audio property control information sent from the audio property optimization processing section 142. For example, the audio property control section 152 controls the volume balance of the sound output of the speaker section 108 based on the audio property control information sent from the audio property optimization processing section 142. The image property control section 154 of the system control section 122 performs the image property optimization process based on the image property control information sent from the image property optimization processing section 144. For example, the image property control section 154 controls the image property of the display panel section 102 based on the image property control information sent from the image property optimization processing section 144. The device direction control section 156 of the system control section 122 performs the device direction optimization process based on the device direction control information sent from the device direction optimization processing section 146. For example, the device direction control section 156 controls the mechanism section 109 of the image display device 100 based on the device direction control information sent from the device direction optimization processing section 146.
As above, the configuration of the control section 110 included in the image display device 100 of the embodiment of the present invention has been explained with reference to Fig. 3 and Fig. 4. In the following, the optimization process according to the user's position by the image display device 100 of the embodiment of the present invention will be explained.
[1-3. Optimization Process according to the User's Position]
Fig. 8 is a flow diagram showing an example of the optimization process according to the user's position by the image display device 100 of the embodiment of the present invention. In the following, this process will be explained with reference to Fig. 8. In Fig. 8, first, when the imaging sections 104 of the image display device 100 start obtaining an image, the image input section 112 of the control section 110 receives the image obtained by the imaging sections 104 (step S802). Then, the image processing section 114 of the control section 110 performs the process of detecting the face included in the image received by the image input section 112 (step S804). Then, the viewing state analysis section 116 of the control section 110 calculates the user's relative position with respect to the optical axis of the camera of the respective imaging sections 104 in the user direction/distance calculation section 132, and calculates the user's position with respect to the device center and the front axis of the image display device 100 in the user position information calculation section 134 (step S806).
Then, the system optimization processing section 120 of the control section 110 calculates the system control information for performing the system optimization process to optimize the state of the image display device 100 with respect to the user in the desired position, based on the user position information calculated in step S806 (step S808). For example, in step S808, the system control information for optimizing the volume balance of the sound output of the speaker section 108 on the left and right sides is calculated.
In addition, in step S808, the system control information for performing the processes such as the gamma correction for optimizing the appearance of black and the color gain correction for complying with the color change is calculated.
In addition, in step S808, the system control information to perform the process for optimizing the device direction of the image display device 100 is calculated.
Then, the system control section 122 of the control section 110 performs the system optimization process based on the system control information calculated in step S808 (step S810), and this process ends.
By the optimization process according to the user's position in Fig. 8, the state of the image display device 100 with respect to the user in the desired position can be optimized.
For example, the left and right volume balance of the sound output of the speaker section 108 is optimized, so that the user can watch the image display device 100 without feeling uncomfortable.
In addition, the appearance of black and the color change are optimized, so that the user can satisfactorily watch the image displayed on the image display device 100. Additionally, the direction of the image display device 100 is optimized so that it directly faces the user, so that the user can satisfactorily watch the image displayed on the image display device 100.
[1-4. Optimization Process according to the Positions of One or More Users]
In the following, the optimization process according to the positions of one or more users by the image display device 100 of the embodiment of the present invention will be explained.
Fig. 9 is a flow diagram showing an example of an optimization process according to the positions of one or more users by the image display device 100 of the embodiment of the present invention.
In Fig. 9, first, when the imaging sections 104 of the image display device 100 start taking an image, the image input section 112 of the control section 110 receives the image obtained by the imaging sections 104, and the image processing section 114 of the control section 110 performs the process of detecting the faces included in the image received by the image input section 112, and the like (step S902).
Then, the viewing state analysis section 116 of the control section 110 receives the result of the face detection process by the image processing section 114, and determines whether the detected number of users is one or plural using the face detection result by the image processing section 114 (step S904). As a result of the determination of step S904, if the detected number of users is one, the viewing state analysis section 116 of the control section 110 calculates the horizontal angle and the vertical angle of the user (step S906). Then, the system optimization processing section 120 of the control section 110 calculates the correction coefficient for the system control information for performing the system optimization process, based on the horizontal angle and the vertical angle of the user calculated in step S906 (step S908). As a result of the determination of step S904, if the detected number of users is plural, the viewing state analysis section 116 of the control section 110 determines whether the plurality of users is in the center of the image or not (step S910). As a result of the determination of step S910, if the plurality of users is not in the center of the image (NO in step S910), the viewing state analysis section 116 of the control section 110 calculates the horizontal angle, the vertical angle and the distance for each of the plurality of users, and averages them (step S912). Additionally, in step S912, the horizontal angle, the vertical angle and the distance can be calculated for each of the plurality of users, and the position of the balance center of the plurality of users can be calculated.
Then, the system optimization processing section 120 of the control section 110 calculates the correction coefficient for the system control information for performing the system optimization process (step S914). As a result of the determination of step S910, if the plurality of users is in the center of the image (YES in step S910), the system optimization processing section 120 of the control section 110 calculates the correction coefficient for the system control information for the system optimization process without correction, or by changing a weight therein (step S916). After the correction coefficients have been calculated in steps S908, S914 and S916, the system optimization processing section 120 of the control section 110 calculates the system control information by adding the correction coefficient to the basic data of the system control information for the system optimization process (step S918), and the process ends.
By the optimization process according to the positions of one or more users in Fig. 9, the state of the image display device 100 with respect to the plurality of users in the desired positions can be optimized. [1-5. Optimization Process according to Ages of One or More Users] In the following, an optimization process according to ages of one or more users by the image display device 100 of the modality of the present invention will be explained.
Fig. 10 is a flow diagram showing an example of an optimization process according to the ages of one or more users by the image display device 100 of the embodiment of the present invention.
In Fig. 10, first, when the imaging sections 104 of the image display device 100 start taking an image, the image input section 112 of the control section 110 receives the image obtained by the imaging sections 104, and the image processing section 114 of the control section 110 performs the process of detecting the faces included in the image received by the image input section 112, and the like (step S1002). Then, the viewing state analysis section 116 of the control section 110 receives the result of the face detection process by the image processing section 114, and determines whether the detected number of users is one or plural using the face detection result by the image processing section 114 (step S1004). As a result of the determination of step S1004, if the detected number of users is one, the viewing state analysis section 116 of the control section 110 analyzes the age of the user (step S1006). Then, the system optimization processing section 120 of the control section 110 calculates the correction coefficient for the system control information for the system optimization process, based on the result of the age analysis of step S1006 (step S1008). As a result of the determination of step S1004, if the detected number of users is plural, the viewing state analysis section 116 of the control section 110 analyzes the ages of the respective users (step S1010). Then, the viewing state analysis section 116 of the control section 110 calculates the correction coefficient for the system control information for the system optimization process without correction, or calculates correction coefficients for the respective users independently and averages them (step S1012). After steps S1008 and S1012, the system optimization processing section 120 of the control section 110 calculates the system control information by adding the correction coefficient to the basic data of the system control information for the system optimization process (step S1014), and the process ends.
By the optimization process according to the ages of the one or more users in Fig. 10, the state of the image display device 100 with respect to the one or more users of different ages can be optimized.
In addition, in the embodiment, the system optimization processing section 120 can calculate system control information to optimize a font size of a GUI displayed in the display panel section 102 based on user position information.
The system control information calculated by the system optimization processing section 120 is sent to the system control section 122, and the optimization process for optimizing the font size of the GUI displayed in the display panel section 102 is performed by the system control section 122. As the GUI font size optimization process, as shown in Fig. 14 (A), there is a process for increasing the font size of the GUI displayed in the display panel section 102 when the user approaches the image display device 100.
In this case, the font size of the GUI is increased when the user approaches the image display device 100, and the font size of the GUI is reduced when the user moves away from the image display device 100. For example, when the GUI font size is small and causes difficulty in reading, the GUI font size increases as the user approaches, so the user can easily recognize the GUI.
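A minimal sketch of this behavior, assuming the font scales inversely with the user's distance relative to a reference distance (the bounds and the exact scaling law are illustrative, not from the source):

```python
def gui_font_size(base_pt, distance, ref_distance, min_pt=8.0, max_pt=48.0):
    """Sketch of Fig. 14 (A): enlarge the GUI font as the user gets
    closer. base_pt is the size shown at ref_distance; min_pt/max_pt
    are hypothetical legibility bounds."""
    size = base_pt * (ref_distance / distance)  # closer user -> larger font
    return max(min_pt, min(max_pt, size))
```

At the reference distance the base size is used; at half the distance the font doubles, and very distant users bottom out at the minimum size.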
Additionally, as the GUI font size optimization process, as shown in Fig. 14 (B), there is a process for refining the information to be displayed, that is, for increasing the amount of information by reducing the font size of the GUI displayed in the display panel section 102 when the user approaches the image display device 100. In this case, the font size of the GUI is reduced and the information to be displayed is increased when the user approaches the image display device 100, and the font size of the GUI is increased when the user moves away from the image display device 100. For example, if the user is far away when a program list is displayed in the display panel section 102, the amount of information is reduced by increasing the font size, and when the user approaches, the amount of information is increased by reducing the font size.
Additionally, in the present embodiment, as shown in Figs. 15 (A) to 15 (C), when calculating the users' viewing positions, the variation in face size can be corrected using the reference face size [w0, h0] at the reference distance d0 in a correction table as follows. For example, from attribute information such as the user's age, a data table of the average face size at each age is predetermined; if the user is a child, the reference face size [w0, h0] can be made to be a face size [w0C, h0C] that is smaller than the reference face size, as shown in Fig. 15 (C), and if the user is an adult, the reference face size [w0, h0] can be made to be a face size [w0A, h0A] that is larger than the reference face size, as shown in Fig. 15 (B).
Additionally, in the present embodiment, in the calculation of the user's viewing position, by registering in advance in the image display device 100 the users who use the image display device 100, for example a family living at the installation location of the image display device 100, the face size of each user can be recorded as a data table. Because of this, the reference face size can be changed for each user. The face size of each user can be recorded by obtaining images together with distance information in cooperation with another distance sensor (not shown), by obtaining images after guiding the user to a predetermined distance, or by obtaining images at the same distance as a scale serving as the reference.
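The face-size correction of Figs. 15 (A) to 15 (C) can be illustrated as follows, assuming the apparent face width in the image is inversely proportional to distance (the child scaling factor is hypothetical, standing in for the per-age data table):

```python
def estimate_distance(face_w_px, ref_face_w_px, ref_distance,
                      is_child=False, child_scale=0.8):
    """Sketch of the Figs. 15 correction: estimate viewing distance from
    the apparent face width, using a smaller reference face [w0C, h0C]
    for a child. child_scale is a hypothetical placeholder value."""
    # pick the reference face width for this user's attribute
    w0 = ref_face_w_px * (child_scale if is_child else 1.0)
    # apparent width ~ w0 * (d0 / d)  =>  d = d0 * (w0 / apparent width)
    return ref_distance * (w0 / face_w_px)
```

Without the child correction, a child's smaller face would be mistaken for a more distant adult; scaling the reference down restores the correct distance estimate.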
Additionally, in the present embodiment, even if the user is outside the imaging area of the imaging sections 104, the previously mentioned system optimization process can be continued by estimating the user's position outside the imaging area from chronological transition information.
Additionally, in the present modality, in the system optimization process, an appropriate time constant can be adjusted according to a user's viewing environment.
Because of this, even if the user makes an abrupt positional change, the system optimization process can be continued.
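One common way to realize such a time constant, offered here only as an assumption about the implementation, is a first-order low-pass filter on the measured user position:

```python
class PositionFilter:
    """Sketch: first-order low-pass on the measured user position, so
    the optimization follows abrupt moves smoothly with time constant
    tau (seconds). Names and structure are illustrative."""

    def __init__(self, tau):
        self.tau = tau
        self.value = None  # no estimate until the first measurement

    def update(self, measured, dt):
        if self.value is None:
            self.value = measured
        else:
            # standard discrete RC filter step
            alpha = dt / (self.tau + dt)
            self.value += alpha * (measured - self.value)
        return self.value
```

A large tau (for example in a multi-user living room) makes the device react slowly; a small tau tracks a single moving viewer closely.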
Note that the previously mentioned series of processes can be performed by hardware, or can be performed by software. When the series of processes is performed by software, a program configuring the software is installed from a program storage medium into a computer built into dedicated hardware, or, for example, into a general-purpose personal computer capable of performing various functions by installing various programs.
The preferred embodiment of the present invention has been described above with reference to the accompanying drawings, but the present invention is not limited to them.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Reference Signs List
100 Image display device
102 Display panel section
104 Imaging section
106 Sensor section
108 Speaker section
109 Mechanism section
110 Control section
112 Image input section
114 Image processing section
116 Viewing state analysis section
117 User position information storage section
118 Viewing state recording section
120 System optimization processing section
122 System control section
132 User direction/distance calculation section
134 User position information calculation section
142 Audio property optimization processing section
144 Image property optimization processing section
146 Device direction optimization processing section
152 Audio property control section
154 Image property control section
156 Device direction control section
200 Data collection server
权利要求:
Claims (7)
[1]
1. Display device characterized by the fact that it comprises: an imaging section that takes a dynamic image of a predetermined range with respect to an image display direction; an image analysis section that analyzes the dynamic image obtained by the imaging section and calculates the position of a user; a system optimization processing section that calculates system control information for optimizing a system based on the position of the user calculated by the image analysis section; and a system control section that optimizes the system based on the system control information calculated by the system optimization processing section, wherein the image analysis section analyzes the dynamic image obtained by the imaging section and calculates the position of a particular user, and wherein the system optimization processing section calculates system control information for optimizing the system based on the position of the particular user calculated by the image analysis section.
[2]
2. Display device according to claim 1, characterized by the fact that the system optimization processing section calculates system control information to optimize a sound output volume balance of an audio output section based on the user's position calculated by the image analysis section.
[3]
3. Display device according to claim 1, characterized by the fact that the system optimization processing section calculates system control information to optimize an image property of an image display section based on the user's position calculated by the image analysis section.
[4]
4. Display device according to claim 1, characterized by the fact that the system optimization processing section calculates system control information to optimize display contents of an image display section based on the user's position calculated by the image analysis section.
[5]
5. Display device according to claim 1, characterized by the fact that the system optimization processing section calculates system control information to optimize a device direction of the display device itself based on the user's position calculated by the image analysis section.
[6]
6. Display device according to claim 1, characterized by the fact that the image analysis section analyzes the dynamic image obtained by the image formation section and calculates a three-dimensional position of the user.
[7]
7. Control method characterized by the fact that it comprises: an imaging step of taking a dynamic image of a predetermined range with respect to an image display direction; an image analysis step of analyzing the obtained dynamic image and calculating the position of a user; a system optimization processing step of calculating system control information for optimizing a system based on the calculated position of the user; and a system control step of optimizing the system based on the calculated system control information, wherein the image analysis step analyzes the dynamic image obtained in the imaging step and calculates the position of a particular user, and wherein the system optimization processing step calculates system control information for optimizing the system based on the position of the particular user calculated in the image analysis step.
类似技术:
公开号 | 公开日 | 专利标题
BR112012005231A2|2020-08-04|display device and control method
US8913005B2|2014-12-16|Methods and systems for ergonomic feedback using an image analysis module
TWI398796B|2013-06-11|Pupil tracking methods and systems, and correction methods and correction modules for pupil tracking
BR112012004830B1|2020-10-27|DISPLAY APPLIANCE AND CONTROL METHOD
US10565720B2|2020-02-18|External IR illuminator enabling improved head tracking and surface reconstruction for virtual reality
US10217286B1|2019-02-26|Realistic rendering for virtual reality applications
WO2016129156A1|2016-08-18|Information processing device, information processing method, and program
US10831238B2|2020-11-10|Display method for flexible display screen and flexible display device
BR112012004835A2|2020-07-28|display apparatus and method
EP3136826B1|2019-05-08|Information processing device, information processing method and program
US20150304625A1|2015-10-22|Image processing device, method, and recording medium
US10719059B2|2020-07-21|Systems and methods for control of output from light output apparatus
TW202008780A|2020-02-16|Augmented reality system and color compensation method thereof
JP6915537B2|2021-08-04|Information processing equipment and methods, as well as projection imaging equipment and information processing methods
JP2010266510A|2010-11-25|Mirror system
TWI659334B|2019-05-11|Transparent display device, control method using therefore and controller for thereof
US9635340B2|2017-04-25|Stereo image processing apparatus and method thereof
TW200837611A|2008-09-16|Computer input device, cursor control device capable adjusting resolution and method for controlling same
JP2013074613A5|2014-10-16|
KR20180059294A|2018-06-04|Display device and method of driving the same
TW201931077A|2019-08-01|Laptop computer and gaze-coordinate calibration e method
US10741147B1|2020-08-11|Driving display device with voltage compensation based on load estimation
US20210166437A1|2021-06-03|Compute amortization heuristics for lighting estimation for augmented reality
TWI693592B|2020-05-11|Display device and display method thereof
TWI612445B|2018-01-21|Optical touch apparatus and a method for determining a touch position
同族专利:
公开号 | 公开日
KR20120082406A|2012-07-23|
US20120293405A1|2012-11-22|
CN102687522B|2015-08-19|
US8952890B2|2015-02-10|
EP2472863A4|2013-02-06|
CN102687522A|2012-09-19|
JP2011066516A|2011-03-31|
RU2012108872A|2013-09-20|
EP2472863A1|2012-07-04|
US9489043B2|2016-11-08|
WO2011033855A1|2011-03-24|
US20150185830A1|2015-07-02|
JP5568929B2|2014-08-13|
KR101784754B1|2017-10-16|
RU2553061C2|2015-06-10|
引用文献:
公开号 | 申请日 | 公开日 | 申请人 | 专利标题

US4973149A|1987-08-19|1990-11-27|Center For Innovative Technology|Eye movement detector|
JPH07106839B2|1989-03-20|1995-11-15|株式会社日立製作所|Elevator control system|
JPH05137200A|1991-11-14|1993-06-01|Sony Corp|Automatic adjustment device for stereo sound volume balance|
JPH09247564A|1996-03-12|1997-09-19|Hitachi Ltd|Television receiver|
DE19613178A1|1996-04-02|1997-10-09|Heinrich Landert|Method for operating a door system and a door system operating according to the method|
US6603491B2|2000-05-26|2003-08-05|Jerome H. Lemelson|System and methods for controlling automatic scrolling of information on a display or screen|
JP3968477B2|1997-07-07|2007-08-29|ソニー株式会社|Information input device and information input method|
WO1999053728A1|1998-04-13|1999-10-21|Matsushita Electric Industrial Co., Ltd.|Illumination control method and illuminator|
US6215471B1|1998-04-28|2001-04-10|Deluca Michael Joseph|Vision pointer method and apparatus|
US6076928A|1998-06-15|2000-06-20|Fateh; Sina|Ideal visual ergonomic system for computer users|
US6243076B1|1998-09-01|2001-06-05|Synthetic Environments, Inc.|System and method for controlling host system interface with point-of-interest data|
JP3561463B2|2000-08-11|2004-09-02|コナミ株式会社|Virtual camera viewpoint movement control method and 3D video game apparatus in 3D video game|
US7348963B2|2002-05-28|2008-03-25|Reactrix Systems, Inc.|Interactive video display system|
US7627139B2|2002-07-27|2009-12-01|Sony Computer Entertainment Inc.|Computer image and audio processing of intensity and input devices for interfacing with a computer program|
EP1426919A1|2002-12-02|2004-06-09|Sony International GmbH|Method for operating a display device|
AU2003301043A1|2002-12-13|2004-07-09|Reactrix Systems|Interactive directed light/sound system|
US8123616B2|2003-03-25|2012-02-28|Igt|Methods and apparatus for limiting access to games using biometric data|
JP2005044330A|2003-07-24|2005-02-17|Univ Of California San Diego|Weak hypothesis generation device and method, learning device and method, detection device and method, expression learning device and method, expression recognition device and method, and robot device|
US7117380B2|2003-09-30|2006-10-03|International Business Machines Corporation|Apparatus, system, and method for autonomic power adjustment in an electronic device|
EP1551178A1|2003-12-18|2005-07-06|Koninklijke Philips Electronics N.V.|Supplementary visual display system|
EP1566788A3|2004-01-23|2017-11-22|Sony United Kingdom Limited|Display|
EP1596271A1|2004-05-11|2005-11-16|Hitachi Europe S.r.l.|Method for displaying information and information display system|
JP4734855B2|2004-06-23|2011-07-27|株式会社日立製作所|Information processing device|
RU2370817C2|2004-07-29|2009-10-20|Самсунг Электроникс Ко., Лтд.|System and method for object tracking|
JP4107288B2|2004-12-10|2008-06-25|セイコーエプソン株式会社|Control system, controlled apparatus and remote control apparatus compatible with this system|
JP4899334B2|2005-03-11|2012-03-21|ブラザー工業株式会社|Information output device|
EP1883914B1|2005-05-06|2011-07-06|Omnilink Systems, Inc.|System and method of tracking the movement of individuals and assets|
KR101112735B1|2005-04-08|2012-03-13|삼성전자주식회사|3D display apparatus using hybrid tracking system|
JP4595750B2|2005-08-29|2010-12-08|ソニー株式会社|Image processing apparatus and method, and program|
JP4225307B2|2005-09-13|2009-02-18|船井電機株式会社|Television receiver|
US8218080B2|2005-12-05|2012-07-10|Samsung Electronics Co., Ltd.|Personal settings, parental control, and energy saving control of television with digital video camera|
JP5201999B2|2006-02-03|2013-06-05|Panasonic Corporation|Input device and method thereof|
JP4876687B2|2006-04-19|2012-02-15|Hitachi, Ltd.|Attention level measuring device and attention level measuring system|
US8340365B2|2006-11-20|2012-12-25|Sony Mobile Communications Ab|Using image recognition for controlling display lighting|
JP2008301167A|2007-05-31|2008-12-11|Sharp Corp|Liquid crystal television receiver|
JP5430572B2|2007-09-14|2014-03-05|Intellectual Ventures Holding 67 LLC|Gesture-based user interaction processing|
JP5564946B2|2007-09-20|2014-08-06|NEC Corporation|Video providing system and video providing method|
JP5559691B2|2007-09-24|2014-07-23|Qualcomm, Incorporated|Enhanced interface for voice and video communication|
JP2009094723A|2007-10-05|2009-04-30|Mitsubishi Electric Corp|Television receiver|
WO2009067670A1|2007-11-21|2009-05-28|Gesturetek, Inc.|Media preferences|
US9986293B2|2007-11-21|2018-05-29|Qualcomm Incorporated|Device access control|
JP4702377B2|2008-02-18|2011-06-15|Seiko Epson Corporation|Control system and controlled device|
JP5169403B2|2008-04-07|2013-03-27|Sony Corporation|Image signal generation apparatus, image signal generation method, program, and storage medium|
JP2010004118A|2008-06-18|2010-01-07|Olympus Corp|Digital photograph frame, information processing system, control method, program, and information storage medium|
WO2010021373A1|2008-08-22|2010-02-25|Sony Corporation|Image display device, control method and computer program|
US8400322B2|2009-03-17|2013-03-19|International Business Machines Corporation|Apparatus, system, and method for scalable media output|
JP5263092B2|2009-09-07|2013-08-14|Sony Corporation|Display device and control method|
JP5556098B2|2009-09-11|2014-07-23|Sony Corporation|Display method and display device|
JP2011107899A|2009-11-16|2011-06-02|Sony Corp|Information processor, method and program for changing setting|
US8523667B2|2010-03-29|2013-09-03|Microsoft Corporation|Parental control settings based on body dimensions|
JP2012104871A|2010-11-05|2012-05-31|Sony Corp|Acoustic control device and acoustic control method|
US9288387B1|2012-09-11|2016-03-15|Amazon Technologies, Inc.|Content display controls based on environmental factors|
JP5418093B2|2009-09-11|2014-02-19|Sony Corporation|Display device and control method|
US8843346B2|2011-05-13|2014-09-23|Amazon Technologies, Inc.|Using spatial information with device interaction|
JP2013031013A|2011-07-28|2013-02-07|Toshiba Corp|Electronic device, control method of electronic device, and control program of electronic device|
CN102314848A|2011-09-09|2012-01-11|Shenzhen TCL New Technology Co., Ltd.|Backlight-control method and system for liquid-crystal display device|
JP5892797B2|2012-01-20|2016-03-23|Japan Broadcasting Corporation (NHK)|Transmission / reception system, transmission / reception method, reception apparatus, and reception method|
KR20130117525A|2012-04-18|2013-10-28|Samsung Display Co., Ltd.|Image display system and driving method thereof|
US9753500B2|2012-07-06|2017-09-05|Nec Display Solutions, Ltd.|Display device including presence sensors for detecting user, and display method for the same|
US9286898B2|2012-11-14|2016-03-15|Qualcomm Incorporated|Methods and apparatuses for providing tangible control of sound|
JP6058978B2|2012-11-19|2017-01-11|Saturn Licensing LLC|Image processing apparatus, image processing method, photographing apparatus, and computer program|
EP2927902A4|2012-11-27|2016-07-06|Sony Corp|Display device, display method, and computer program|
US20140153753A1|2012-12-04|2014-06-05|Dolby Laboratories Licensing Corporation|Object Based Audio Rendering Using Visual Tracking of at Least One Listener|
EP2955934B1|2013-02-05|2017-09-20|Toa Corporation|Amplification system|
WO2014126991A1|2013-02-13|2014-08-21|Vid Scale, Inc.|User adaptive audio processing and applications|
US10139925B2|2013-03-04|2018-11-27|Microsoft Technology Licensing, Llc|Causing specific location of an object provided to a device|
DE102013206569B4|2013-04-12|2020-08-06|Siemens Healthcare Gmbh|Gesture control with automated calibration|
WO2015100205A1|2013-12-26|2015-07-02|Interphase Corporation|Remote sensitivity adjustment in an interactive display system|
CN104298347A|2014-08-22|2015-01-21|MediaTek Singapore Pte. Ltd.|Method and device for controlling screen of electronic display device and display system|
WO2016072128A1|2014-11-04|2016-05-12|Sony Corporation|Information processing device, communication system, information processing method, and program|
JP2016092765A|2014-11-11|2016-05-23|Ricoh Company, Ltd.|Information processing device, user detection method and program|
CN106713793A|2015-11-18|2017-05-24|Tianjin Samsung Electronics Co., Ltd.|Sound playing control method and device thereof|
US10291949B2|2016-10-26|2019-05-14|Orcam Technologies Ltd.|Wearable device and methods for identifying a verbal contract|
WO2018155354A1|2017-02-21|2018-08-30|Panasonic Intellectual Property Management Co., Ltd.|Electronic device control method, electronic device control system, electronic device, and program|
EP3396226A1|2017-04-27|2018-10-31|Advanced Digital Broadcast S.A.|A method and a device for adjusting a position of a display screen|
US11011095B2|2018-08-31|2021-05-18|Chongqing Hkc Optoelectronics Technology Co., Ltd.|Display panel, and image control device and method thereof|
US11032508B2|2018-09-04|2021-06-08|Samsung Electronics Co., Ltd.|Display apparatus and method for controlling audio and visual reproduction based on user's position|
CN110503891A|2019-07-05|2019-11-26|Taicang Qinfeng Advertising Media Co., Ltd.|A kind of electronic bill-board transform method and its system based on distance change|
JP2021015203A|2019-07-12|2021-02-12|Fuji Xerox Co., Ltd.|Image display device, image forming apparatus, and program|
Legal status:
2020-08-11| B15K| Others concerning applications: alteration of classification|Free format text: THE PREVIOUS CLASSIFICATIONS WERE: H04N 5/64 , H04N 7/173 Ipc: H04N 5/64 (2006.01), H04N 21/4223 (2011.01), H04N |
2020-08-11| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-12-01| B11B| Dismissal acc. art. 36, par 1 of ipl - no reply within 90 days to fulfil the necessary requirements|
2021-11-03| B350| Update of information on the portal [chapter 15.35 patent gazette]|
Priority:
Application number | Filing date | Patent title
JP2009213377A|JP5568929B2|2009-09-15|2009-09-15|Display device and control method|
JP2009-213377|2009-09-15|
PCT/JP2010/062311|WO2011033855A1|2009-09-15|2010-07-22|Display device and control method|